13 research outputs found

    Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach

    Full text link
    Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt "pedagogical approaches" (from the literature on neural networks) to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches must be adapted to handle the large-scale relational aspects of knowledge bases, and we demonstrate their strengths and weaknesses experimentally. Comment: presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
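    A minimal sketch of the pedagogical idea described above: treat the trained embedding model as a black-box oracle over triples and mine weighted Horn rules from the facts it predicts. The oracle, entities, relations, and confidence threshold here are all hypothetical toy stand-ins, not the paper's actual method or data.

```python
def extract_horn_rules(predict, entities, relations, min_conf=0.8):
    """Pedagogical extraction sketch: query the black-box oracle
    predict(h, r, t) -> bool and mine weighted length-1 Horn rules
    of the form  body(X, Y) => head(X, Y)  from its predictions."""
    # Materialise the oracle's predicted facts for each relation.
    facts = {r: {(h, t) for h in entities for t in entities
                 if predict(h, r, t)} for r in relations}
    rules = []
    for body in relations:
        for head in relations:
            if body == head or not facts[body]:
                continue
            support = len(facts[body] & facts[head])
            conf = support / len(facts[body])  # rule weight
            if conf >= min_conf:
                rules.append((body, head, conf))
    return rules

# Hypothetical oracle: 'capital_of' implies 'located_in' in its predictions.
preds = {("paris", "capital_of", "france"), ("paris", "located_in", "france"),
         ("lyon", "located_in", "france")}
oracle = lambda h, r, t: (h, r, t) in preds
print(extract_horn_rules(oracle, {"paris", "lyon", "france"},
                         {"capital_of", "located_in"}))
```

    On this toy oracle the sketch extracts the single rule capital_of(X, Y) => located_in(X, Y) with weight 1.0; the real pedagogical approaches in the paper must additionally cope with the scale of the relational domain.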

    Updating Incoherent Credences - Extending the Dutch Strategy Argument for Conditionalization

    Get PDF
    In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that applies regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This turns out to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
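    A tiny numeric illustration of the Dutch-strategy idea the abstract relies on, with hypothetical credences: an agent whose credences in A and in not-A sum to more than 1 can be sold a pair of individually "fair" bets that guarantee a loss.

```python
def dutch_book_loss(credence_A, credence_not_A):
    """Sure loss a bookie can extract from an agent whose credences in
    A and not-A sum to more than 1 (an incoherent assignment).  The
    agent regards a bet paying 1 on E as fair at a price equal to her
    credence in E, so she buys both bets; exactly one of them pays off."""
    price_paid = credence_A + credence_not_A   # agent pays for both bets
    payoff = 1.0                               # exactly one of A, not-A occurs
    return price_paid - payoff                 # guaranteed loss when > 0

# Hypothetical incoherent agent: P(A) = 0.6 and P(not-A) = 0.6.
print(dutch_book_loss(0.6, 0.6))   # guaranteed loss of 0.2 in every world
```

    The extension discussed in the paper measures updating plans by how much they change this kind of guaranteed exposure, rather than the static credences alone.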

    Probabilistic satisfiability

    No full text
    This work studies the Probabilistic Satisfiability problem (PSAT), reviewing its solution via linear programming and proposing new algorithms that solve it by reduction to SAT. We construct a polynomial many-to-one reduction from PSAT to SAT, called the Canonical Reduction, which encodes rational arithmetic operations in bits, as logical variables. We analyze the computational complexity of this reduction and propose a Limited Precision Canonical Reduction to circumvent that complexity. We also present a Turing reduction from PSAT to SAT, based on the Simplex algorithm and on the Atomic Normal Form we introduce, and suggest modifications to this reduction in pursuit of computational efficiency. Finally, we implement these reductions in order to investigate the complexity profile of PSAT; we observe the phase transition phenomenon and discuss the conditions for its detection.
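    The linear-programming view of PSAT that the abstract reviews can be sketched directly: a set of probability assessments is satisfiable iff some distribution over the exponentially many truth assignments realises all of them at once. The sketch below brute-forces the worlds and hands the feasibility question to an off-the-shelf LP solver; the formulas and probabilities are hypothetical examples, and this is the classical LP formulation, not the thesis's SAT reductions.

```python
from itertools import product
from scipy.optimize import linprog

def psat(formulas, probs, n_vars):
    """Decide Probabilistic Satisfiability via linear programming:
    PSAT holds iff some distribution pi over the 2^n truth assignments
    gives each formula exactly its assessed probability."""
    worlds = list(product([False, True], repeat=n_vars))
    # One equality row per assessment: sum of pi(w) over worlds w |= phi_i.
    A_eq = [[1.0 if f(w) else 0.0 for w in worlds] for f in formulas]
    A_eq.append([1.0] * len(worlds))          # pi must sum to 1
    b_eq = list(probs) + [1.0]
    res = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(worlds))
    return res.status == 0                    # feasible <=> satisfiable

# P(x0) = 0.7, P(x1) = 0.7, P(x0 and x1) = 0.1 is unsatisfiable, since
# inclusion-exclusion would force P(x0 or x1) = 0.7 + 0.7 - 0.1 = 1.3 > 1.
fs = [lambda w: w[0], lambda w: w[1], lambda w: w[0] and w[1]]
print(psat(fs, [0.7, 0.7, 0.1], 2))   # False
print(psat(fs, [0.7, 0.7, 0.5], 2))   # True
```

    The LP has one column per world, which is exactly why the thesis pursues SAT reductions and, for the Turing reduction, Simplex with column generation instead of materialising all columns.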

    Measuring inconsistency in probabilistic knowledge bases (Medindo inconsistência em bases de conhecimento probabilístico)

    No full text
    In standard probabilistic reasoning, in order to perform inference from a knowledge base, it is normally necessary to guarantee the consistency of that base. When we come across an inconsistent set of probabilistic assessments, we want to know where the inconsistency lies, how severe it is, and how to correct it. Inconsistency measures have recently been put forward in the Artificial Intelligence community as a tool to address these issues. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yields an extra desirable property: inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new ways of characterising inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been interested in measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, which determine her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to disadvantageous bet transactions, also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss to which she is vulnerable. We prove that these incoherence measures via Dutch books are equivalent to the inconsistency measures via distance minimisation from the AI community.
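    The distance-minimisation measure the abstract describes can be sketched as a linear program: find the least total (L1) adjustment of the assessed probabilities that makes them realisable by a single distribution over truth assignments. The formulas and numbers below are hypothetical, and this is one standard LP encoding of the idea, not the thesis's specific measure.

```python
from itertools import product
from scipy.optimize import linprog

def incoherence(formulas, probs, n_vars):
    """Distance-based inconsistency measure: the minimal total (L1)
    adjustment of the assessed probabilities that makes the assessment
    coherent, i.e. induced by one distribution over truth assignments."""
    worlds = list(product([False, True], repeat=n_vars))
    nw, nf = len(worlds), len(formulas)
    sat = [[1.0 if f(w) else 0.0 for w in worlds] for f in formulas]
    # Variables: pi(w) for each world, then one slack e_i per assessment.
    c = [0.0] * nw + [1.0] * nf               # minimise the sum of slacks
    A_ub, b_ub = [], []
    for i in range(nf):
        e = [0.0] * nf
        e[i] = -1.0
        A_ub.append(sat[i] + e)                       #  sum - e_i <= q_i
        b_ub.append(probs[i])
        A_ub.append([-s for s in sat[i]] + e)         # -sum - e_i <= -q_i
        b_ub.append(-probs[i])
    A_eq = [[1.0] * nw + [0.0] * nf]          # pi must sum to 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * (nw + nf))
    return res.fun

# Hypothetical incoherent agent: P(A) = 0.6 and P(not-A) = 0.6.
fs = [lambda w: w[0], lambda w: not w[0]]
print(incoherence(fs, [0.6, 0.6], 1))   # minimal repair, approximately 0.2
```

    On this example the measure equals the agent's guaranteed Dutch-book loss, which is the kind of correspondence the thesis proves in general.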

    Algorithms for Deciding Counting Quantifiers over Unary Predicates

    No full text
    We study algorithms for fragments of first-order logic extended with counting quantifiers, which are known to be highly complex in general. We propose a fragment over unary predicates that is NP-complete and that admits a normal form in which counting quantification sentences have a single unary predicate, so we call it the CQU fragment. We provide an algebraic formulation of the CQU satisfiability problem in terms of Integer Linear Programming, based on which two algorithms are proposed: a direct reduction to SAT instances, and an Integer Linear Programming version extended with a column generation mechanism. The latter is shown to lead to a viable implementation, and experiments show that this algorithm exhibits phase transition behavior.
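    The algebraic formulation mentioned above can be illustrated concretely: with only unary predicates, a counting sentence constrains the cardinalities of the regions of the Venn diagram of the predicates, so satisfiability becomes an integer feasibility question over one variable per region. The toy decision procedure below brute-forces small region cardinalities instead of calling an ILP solver, and its constraint encoding is a hypothetical simplification of the paper's CQU normal form.

```python
from itertools import product

def cqu_satisfiable(constraints, n_preds, max_domain=8):
    """Sketch of the algebraic view of counting quantifiers over unary
    predicates: one integer variable per region of the Venn diagram of
    the predicates; counting sentences become linear conditions.
    Brute-force over bounded region sizes (toy stand-in for ILP)."""
    regions = list(product([False, True], repeat=n_preds))  # 2^m regions
    for sizes in product(range(max_domain + 1), repeat=len(regions)):
        card = dict(zip(regions, sizes))
        # A constraint (i, op, k) reads: |{x : P_i(x)}| op k.
        ok = True
        for i, op, k in constraints:
            count = sum(card[r] for r in regions if r[i])
            ok &= (count >= k) if op == ">=" else (count <= k)
        if ok:
            return True
    return False

# Exists>=3 x P0(x), Exists<=1 x P1(x), Exists>=2 x P1(x): unsatisfiable.
print(cqu_satisfiable([(0, ">=", 3), (1, "<=", 1), (1, ">=", 2)], 2))
# Dropping the last constraint makes it satisfiable.
print(cqu_satisfiable([(0, ">=", 3), (1, "<=", 1)], 2))
```

    The exhaustive search over region sizes is exponential, which is precisely the blow-up the paper's SAT reduction and column-generation ILP algorithm are designed to tame.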

    On the Coherence of Probabilistic Relational Formalisms

    No full text
    There are several formalisms that enhance Bayesian networks by including relations amongst individuals as modeling primitives. For instance, Probabilistic Relational Models (PRMs) use diagrams and relational databases to represent repetitive Bayesian networks, while Relational Bayesian Networks (RBNs) employ first-order probability formulas for the same purpose. We examine the coherence checking problem for these formalisms; that is, the problem of guaranteeing that every grounding of a well-formed set of sentences produces a valid Bayesian network. This is a novel version of de Finetti's problem of coherence checking for probabilistic assessments. We show how to reduce the coherence checking problem for relational Bayesian networks to a validity problem in first-order logic augmented with a transitive closure operator, and how to combine this logic-based approach with faster but incomplete algorithms.

    Classifying Inconsistency Measures Using Graphs

    Get PDF
    The aim of measuring inconsistency is to obtain an evaluation of the imperfections in a set of formulas, and this evaluation may then be used to help decide on some course of action (such as rejecting some of the formulas, resolving the inconsistency, or seeking better sources of information). A number of proposals have been made to define measures of inconsistency, each with its own rationale. To date, however, it is not clear how to delineate the space of options for measures, nor how to classify measures systematically. To address these problems, we introduce a general framework for comparing syntactic measures of inconsistency. It is based on the notion of an inconsistency graph for each knowledgebase: a bipartite graph with one set of vertices representing formulas in the knowledgebase, another set representing minimal inconsistent subsets of the knowledgebase, and edges indicating that a formula belongs to a minimal inconsistent subset. We show that various measures can be computed using the inconsistency graph. We then introduce abstractions of the inconsistency graph and use them to construct a hierarchy of syntactic inconsistency measures. Furthermore, we extend the inconsistency graph with a labeling that enlarges the hierarchy to include some other types of inconsistency measures.
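    The inconsistency graph just described can be sketched for small propositional knowledgebases: enumerate subsets in increasing size, keep the unsatisfiable ones with no smaller unsatisfiable subset (the minimal inconsistent subsets), and connect each formula to the MIS it belongs to. Satisfiability is checked by brute force over truth assignments; the knowledgebase below is a hypothetical example.

```python
from itertools import combinations, product

def inconsistency_graph(formulas, n_atoms):
    """Build the bipartite inconsistency graph of a knowledgebase:
    formula vertices on one side, minimal inconsistent subsets (MIS)
    on the other, with an edge whenever a formula belongs to a MIS."""
    def satisfiable(subset):
        return any(all(f(w) for f in subset)
                   for w in product([False, True], repeat=n_atoms))
    mis = []
    for k in range(1, len(formulas) + 1):
        for idx in combinations(range(len(formulas)), k):
            subset = [formulas[i] for i in idx]
            # Unsatisfiable and containing no smaller MIS => minimal,
            # since smaller subsets were examined at earlier k.
            if not satisfiable(subset) and \
               not any(set(m) <= set(idx) for m in mis):
                mis.append(idx)
    edges = [(i, m) for m in mis for i in m]
    return mis, edges

# Hypothetical knowledgebase {a, not a, b}: one MIS, namely {a, not a}.
kb = [lambda w: w[0], lambda w: not w[0], lambda w: w[1]]
mis, edges = inconsistency_graph(kb, 2)
print(len(mis))                        # number of MIS vertices: 1
print(sorted(i for i, _ in edges))     # formulas touching a MIS: [0, 1]
```

    Graph-level quantities such as the number of MIS vertices or the degree of each formula vertex are exactly the raw material from which the paper's framework computes and compares syntactic inconsistency measures.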